Recurrent connection:
- Asia > China > Beijing > Beijing (0.05)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.04)
- North America > United States (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Beijing > Beijing (0.04)
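For concreteness, here is a minimal sketch of one way the hierarchical label paths with confidence scores listed above could be represented and ranked. The `LabelPath` class, its fields, and the sample values are illustrative assumptions, not code from the original work:

```python
# Sketch: representing retrieved hierarchical label paths with confidence
# scores, in the "A > B > C (p)" format used in the lists above.
from dataclasses import dataclass

@dataclass
class LabelPath:
    nodes: tuple          # e.g. ("Asia", "China", "Beijing", "Beijing")
    confidence: float     # retrieval confidence for the full path

    def __str__(self):
        return " > ".join(self.nodes) + f" ({self.confidence:.2f})"

retrieved = [
    LabelPath(("Asia", "China", "Beijing", "Beijing"), 0.05),
    LabelPath(("North America", "United States"), 0.14),
]

# Rank retrieved paths by confidence, highest first.
for path in sorted(retrieved, key=lambda p: p.confidence, reverse=True):
    print(path)
```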
Algorithm 1: Learning the external stimulus s
Require: (x
Figure taken and adapted from [38]; different works consider different properties. Compared to backpropagation (BP), predictive coding (PC) allows more flexibility in how the model is defined, trained, and evaluated. The experiments reported in this paper show the best results achieved on each specific task and, consequently, reflect only the effects of a specific set of hyperparameters. Feedforward networks (left) simply overfit the input samples, i.e., they reproduce them without any modification, even when those samples are unrelated to the training data.
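Since the listing for Algorithm 1 above is truncated, the following is a minimal sketch of how inferring an external stimulus s by gradient descent on a predictive-coding energy typically proceeds. The one-layer linear generative model, the quadratic energy, and all hyperparameters are assumptions for illustration, not the paper's actual Algorithm 1:

```python
# Sketch: infer an external stimulus s that explains an observation x under a
# fixed generative mapping W, by descending the PC energy E = 0.5 * ||x - W s||^2.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))      # fixed generative weights (assumed given)
x = rng.normal(size=4)           # observed activity to be explained
s = np.zeros(3)                  # external stimulus, initialised at zero

lr = 0.05
for _ in range(500):
    eps = x - W @ s              # prediction error
    s += lr * (W.T @ eps)        # gradient descent on E w.r.t. s (dE/ds = -W.T @ eps)

print("inferred stimulus:", s)
print("residual error:", np.linalg.norm(x - W @ s))
```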
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Kansas > Rawlins County (0.04)
- Europe > Austria > Vienna (0.04)
- North America > United States > Pennsylvania (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- North America > United States (0.46)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This submission describes a novel autoencoder method that uses unsupervised learning to configure a recurrent network to encode both the current and past states of an input. I am neither a mathematician nor a machine learning expert, and thus am not qualified to review the work for technical merit. However, I have extensive experience in neural network modeling, and thus appreciate both the objective and the purported accomplishments: the ability to train a recurrent network to store input sequences in an efficient manner using unsupervised learning. The authors describe a mechanism that addresses the problem by breaking it into two stages -- autoencoding, and then optimization -- that are carried out over different time scales.
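The review only summarizes the mechanism at a high level. A minimal sketch of one way such a two-timescale scheme could look is given below; the architecture, the surrogate slow update, and all names are assumptions of this sketch, not the reviewed submission's method:

```python
# Sketch: fast autoencoding updates (decoder reconstructs the current input and
# the previous hidden state) interleaved with slower recurrent-weight updates.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_h = 5, 20
U = rng.normal(scale=0.3, size=(n_h, n_in))   # input weights (fixed)
W = rng.normal(scale=0.05, size=(n_h, n_h))   # recurrent weights (slow)
D = np.zeros((n_in, n_h))                     # input decoder (fast)
R = np.zeros((n_h, n_h))                      # past-state readout (fast)

lr_fast, lr_slow = 0.05, 0.005
h_prev = np.zeros(n_h)
for t in range(2000):
    x = rng.normal(size=n_in)
    h = np.tanh(W @ h_prev + U @ x)

    # Fast stage: autoencode the current input from the hidden state.
    err_x = x - D @ h
    D += lr_fast * np.outer(err_x, h)

    # Fast stage: read out the previous hidden state (the "past").
    err_h = h_prev - R @ h
    R += lr_fast * np.outer(err_h, h)

    # Slow stage: adjust recurrent weights to make the past easier to recover
    # (gradient of 0.5 * ||err_h||^2 w.r.t. W, truncated at one step).
    W += lr_slow * np.outer((R.T @ err_h) * (1 - h**2), h_prev)

    h_prev = h

print("input reconstruction error:", np.linalg.norm(err_x))
```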
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. Summary: This paper deals with sampling methods based on linear rate-based neural networks. First, it shows that symmetric weights (a common constraint in many models) significantly hurt the mixing rate. Then it shows that a (more physiological) non-normal network can have a much faster mixing rate if its connectivity is optimized for this purpose. This holds even when additional biological constraints (Dale's law) are imposed.
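The core claim the review describes can be illustrated with a standard linear Langevin sampler. A network dx = A x dt + sigma dW has stationary distribution N(0, Sigma) whenever A Sigma + Sigma A^T + sigma^2 I = 0; the symmetric choice A = -(sigma^2/2) Sigma^{-1} is one solution, and adding a skew-symmetric term (making A non-normal) preserves Sigma but can speed up mixing. The sketch below is an assumption-laden illustration of that effect, not the paper's optimization procedure; the particular Sigma, skew term, and step sizes are placeholders:

```python
# Sketch: compare mixing of a symmetric vs. a non-normal linear sampler that
# both have N(0, Sigma) as their stationary distribution.
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 1.0
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])    # target covariance
Sigma_inv = np.linalg.inv(Sigma)

A_sym = -(sigma2 / 2) * Sigma_inv             # symmetric sampler
S = np.array([[0.0, 3.0], [-3.0, 0.0]])      # skew-symmetric speed-up term
A_fast = (-(sigma2 / 2) * np.eye(2) + S) @ Sigma_inv  # non-normal sampler

def lag_autocorr(A, dt=1e-3, steps=100_000, lag=100):
    """Euler-Maruyama simulation; returns the lag-`lag` autocorrelation of x[0]."""
    x = np.zeros(2)
    xs = np.empty(steps)
    for t in range(steps):
        x = x + dt * (A @ x) + np.sqrt(sigma2 * dt) * rng.normal(size=2)
        xs[t] = x[0]
    xs -= xs.mean()
    return (xs[:-lag] * xs[lag:]).mean() / xs.var()

# The non-normal sampler should show a markedly lower autocorrelation,
# i.e., faster mixing, at the same stationary covariance.
print("lag-100 autocorr, symmetric: ", lag_autocorr(A_sym))
print("lag-100 autocorr, non-normal:", lag_autocorr(A_fast))
```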